LAZY TRAINING: INTERACTIVE CLASSIFICATION LEARNING

Authors

  • Michael Rimer
  • Michael A. Goodrich
  • Tony R. Martinez
Abstract

Michael Rimer, Department of Computer Science, Master of Science

Backpropagation, like most learning algorithms that can form complex decision surfaces, is prone to overfitting. This work presents the paradigm of Interactive Training (IT), a logical extension to backpropagation training of artificial neural networks that employs interaction among multiple output nodes. IT methods allow output nodes to learn together to form more complex systems while not restraining their individual ability to specialize. Lazy training, an implementation of IT, is presented here as a novel objective function for learning classification problems. It seeks to directly minimize classification error by backpropagating error only on misclassified samples, and only from the outputs that are responsible for the error. Lazy training discourages overfitting and yields higher accuracy on classification problems than optimizing current objective functions such as sum-squared error (SSE) and cross-entropy (CE). Experiments on a large, real-world OCR data set show that lazy training significantly reduces generalization error relative to an optimized backpropagation network minimizing SSE or CE, from 2.14% and 1.90%, respectively, to 0.89%. Comparable results are achieved over eight data sets from the UC Irvine Machine Learning Database Repository, with average accuracy increasing from 90.7% and 91.3% for optimized SSE and CE networks, respectively, to 92.1% for lazy training under 10-fold stratified cross-validation. Analysis indicates that lazy training performs a fundamentally different search of the feature space than backpropagation optimizing SSE or CE and produces radically different solutions. These results are supported by the theoretical and conceptual progression from algorithmic to interactive training models.
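The abstract's core rule, backpropagating error only on misclassified samples and only from the responsible outputs, can be sketched as an error-signal function. This is a minimal illustration of that idea, not the thesis's exact formulation: it assumes a winner-take-all classification and penalizes just the true-class output and the wrongly winning output on misclassified samples.

```python
import numpy as np

def lazy_error_signal(outputs, targets):
    """Illustrative lazy-training error signal.

    outputs: (n_samples, n_classes) array of network output activations
    targets: (n_samples,) array of integer class labels

    Correctly classified samples produce zero error (no weight update);
    misclassified samples produce error only at the true-class output
    (pushed up) and the wrongly winning output (pushed down).
    """
    error = np.zeros_like(outputs)
    winners = outputs.argmax(axis=1)          # winner-take-all prediction
    for i in range(outputs.shape[0]):
        t, w = targets[i], winners[i]
        if w != t:                            # only misclassified samples
            error[i, t] = 1.0 - outputs[i, t] # raise the true-class output
            error[i, w] = -outputs[i, w]      # lower the wrong winner
    return error
```

In contrast, an SSE objective would generate nonzero error for every sample whose outputs are not exactly at their targets, which is one intuition for why lazy training can be less prone to overfitting.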


Similar resources

A lazy learning approach for building classification models

In this paper we propose a lazy learning strategy for building classification learning models. Instead of learning the models with the whole training data set before observing the new instance, a selection of patterns is made depending on the new query received and a classification model is learnt with those selected patterns. The selection of patterns is not homogeneous, in the sense that the ...
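The lazy strategy described above, deferring model construction until a query arrives and learning only from patterns selected for that query, can be sketched as follows. The selection rule and local model here (k nearest patterns, nearest-centroid classification) are simplifying assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def lazy_classify(X_train, y_train, query, k=5):
    """Classify a single query by building a local model on demand.

    X_train: (n, d) training patterns; y_train: (n,) integer labels.
    Selects the k training patterns nearest to the query, then fits a
    nearest-centroid model on just those patterns.
    """
    dists = np.linalg.norm(X_train - query, axis=1)
    nearest = np.argsort(dists)[:k]           # per-query pattern selection
    Xs, ys = X_train[nearest], y_train[nearest]
    # Local model: one centroid per class present in the selection
    centroids = {c: Xs[ys == c].mean(axis=0) for c in np.unique(ys)}
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - query))
```

Because the model is rebuilt per query, training cost is deferred to classification time, the trade-off that motivates the competence-guided editing work listed below.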


ML-KNN: A lazy learning approach to multi-label learning

Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this...


Competence-guided Editing Methods for Lazy Learning

Lazy learning algorithms retain their raw training examples and defer all example-processing until problem-solving time (e.g., case-based learning, instance-based learning, and nearest-neighbour methods). A case-based classifier will typically compare a new target query to every case in its case-base (its raw training data) before deriving a target classification. This can make lazy methods prohi...


A Lazy Approach for Machine Learning Algorithms

Most machine learning algorithms are eager methods in the sense that a model is generated with the complete training data set and, afterwards, this model is used to generalize the new test instances. In this work we study the performance of different machine learning algorithms when they are learned using a lazy approach. The idea is to build a classification model once the test instance is rec...


Learning Lazy Rules to Improve the Performance of Classifiers

Based on an earlier study on lazy Bayesian rule learning, this paper introduces a general lazy learning framework, called LazyRule, that begins to learn a rule only when classifying a test case. The objective of the framework is to improve the performance of a base learning algorithm. It has the potential to be used for different types of base learning algorithms. LazyRule performs attribute eli...



Publication date: 2002